26 research outputs found

    Consistent ICP for the registration of sparse and inhomogeneous point clouds

    In this paper, we derive a novel iterative closest point (ICP) technique that performs point cloud alignment in a robust and consistent way. Traditional ICP techniques minimize point-to-point distances, which works well when the point clouds are free of noise and clutter and moreover are dense and more or less uniformly sampled. Otherwise, it is better to employ the point-to-plane or another metric that locally approximates the surface of the objects. However, the point-to-plane metric does not yield a symmetric solution, i.e. the estimated transformation of point cloud p to point cloud q is not necessarily equal to the inverse of the transformation of point cloud q to point cloud p. To improve ICP, we enforce such symmetry constraints as prior knowledge and also make the method robust to noise and clutter. Experimental results show that our method is indeed much more consistent and accurate in the presence of noise and clutter compared to existing ICP algorithms.
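    One way to make the point-to-plane cost symmetric, as sketched below, is to project each pair's residual onto the sum of the source and target normals so that swapping the roles of the two clouds penalises the same geometry. This is modelled on published symmetric ICP variants and is an illustrative assumption, not necessarily the exact cost used in the paper.

```python
import numpy as np

def symmetric_point_to_plane_residuals(P, Q, NP, NQ, R, t):
    """Per-pair residuals of a symmetric point-to-plane cost.

    The error vector (R p + t - q) is projected onto the combined
    normal (n_p + n_q), so registering P to Q and Q to P measure
    the same geometric deviation (hypothetical sketch)."""
    diff = P @ R.T + t - Q        # transformed source minus target, row-wise
    n_sum = NP + NQ               # normals contributed by both clouds
    return np.einsum('ij,ij->i', diff, n_sum)   # row-wise dot products
```

    With identical clouds and the identity transformation, every residual is zero, as a plain point-to-plane cost would also give; the difference only shows up when the two clouds' normal estimates disagree.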

    Towards online mobile mapping using inhomogeneous lidar data

    In this paper we present a novel approach to quickly obtain detailed 3D reconstructions of large-scale environments. The method is based on the consecutive registration of 3D point clouds generated by modern lidar scanners such as the Velodyne HDL-32e or HDL-64e. The main contribution of this work is that the proposed system specifically deals with the sparsity and inhomogeneity of the point clouds typically produced by these scanners. More specifically, we combine the simplicity of the traditional iterative closest point (ICP) algorithm with an analysis of the underlying surface in a local neighbourhood of each point. The algorithm was evaluated on our own dataset, collected with accurate ground truth. The experiments demonstrate that the system produces highly detailed 3D maps at a speed of 10 sensor frames per second.
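    The standard building block for analysing the underlying surface in a point's local neighbourhood is a PCA fit: the eigenvector of the neighbourhood covariance with the smallest eigenvalue approximates the surface normal. The sketch below shows this generic step; the function name and neighbourhood selection are assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_normal(neighbours):
    """PCA surface normal from a point's k-nearest neighbours.

    The eigenvector with the smallest eigenvalue of the covariance
    matrix is the direction of least spread, i.e. the local surface
    normal (up to sign)."""
    centred = neighbours - neighbours.mean(axis=0)
    cov = centred.T @ centred / len(neighbours)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues sorted ascending
    return eigvecs[:, 0]                     # eigenvector of smallest eigenvalue
```

    For sparse, inhomogeneous lidar scans the choice of neighbourhood (k-nearest versus fixed radius) matters greatly, since ring-structured scanners sample far more densely along a scan line than across lines.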

    Have I seen this place before? A fast and robust loop detection and correction method for 3D Lidar SLAM

    In this paper, we present a complete loop detection and correction system developed for data originating from lidar scanners. Regarding detection, we propose a combination of a global point cloud matcher with a novel registration algorithm to determine loop candidates in a highly effective way. The registration method can deal with point clouds that deviate largely in orientation while improving the efficiency over existing techniques. In addition, we accelerated the computation of the global point cloud matcher by a factor of 2–4, exploiting the GPU to its maximum. Experiments demonstrated that our combined approach detects loops in lidar data more reliably than other point cloud matchers, as it leads to better precision–recall trade-offs: for nearly 100% recall, we gain up to 7% in precision. Finally, we present a novel loop correction algorithm that improves the average and median pose error by a factor of 2, while requiring only a handful of seconds to complete.
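    At its simplest, loop detection flags pose pairs that are spatially close but far apart in time, and the candidates are then verified by a matcher and registration step. The toy sketch below illustrates only that candidate-generation idea; the function name, gap, and radius are illustrative assumptions, not the paper's global matcher.

```python
import math

def loop_candidates(positions, min_gap=3, radius=0.2):
    """Toy loop-candidate generator over a 2D trajectory.

    Flags pose pairs (i, j) that are within `radius` of each other
    but at least `min_gap` frames apart, i.e. likely revisits of the
    same place (illustrative stand-in for a global matcher)."""
    cands = []
    for i, (xi, yi) in enumerate(positions):
        for j in range(i + min_gap, len(positions)):
            xj, yj = positions[j]
            if math.hypot(xi - xj, yi - yj) <= radius:
                cands.append((i, j))
    return cands
```

    On a square trajectory that returns near its start, only the revisit pair survives the gap and radius tests; a real system replaces the Euclidean test with appearance-based point cloud matching, since odometry drift makes raw positions unreliable over long loops.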

    Surface-based GICP

    In this paper we present an extension of the Generalized ICP algorithm for the registration of point clouds for use in lidar-based SLAM applications. As opposed to the plane-to-plane cost function, which assumes that each point set is locally planar, we propose to incorporate additional information on the underlying surface into the GICP process. Doing so, we are able to deal better with the artefacts that are typically present in lidar point clouds, including an inhomogeneous and sparse point density, noise and missing data. Experiments on lidar sequences of the KITTI benchmark demonstrate that we are able to substantially reduce the positional error compared to the original GICP algorithm.
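    For context, the standard Generalized ICP plane-to-plane cost for one correspondence is the squared Mahalanobis distance of the residual under the combined covariance C_q + R C_p R^T, where each point's covariance is a flattened "disc" aligned with the local plane. The sketch below shows that baseline cost; the surface-based extension described in the abstract would replace the planar covariances with surface-derived ones.

```python
import numpy as np

def gicp_pair_cost(p, q, Cp, Cq, R, t):
    """Plane-to-plane GICP cost of a single correspondence:
    d^T (Cq + R Cp R^T)^{-1} d  with residual d = R p + t - q."""
    d = R @ p + t - q
    M = Cq + R @ Cp @ R.T          # combined covariance of the pair
    return float(d @ np.linalg.solve(M, d))
```

    With disc-shaped covariances, displacement within the local plane is cheap while displacement along the normal is heavily penalised, which is exactly the behaviour that makes GICP robust to sliding along surfaces.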

    Indoor assistance for visually impaired people using a RGB-D camera

    In this paper a navigational aid for visually impaired people is presented. The system uses an RGB-D camera to perceive the environment and implements self-localization, obstacle detection and obstacle classification. The novelty of this work is threefold. First, self-localization is performed by means of a novel camera tracking approach that uses both depth and color information. Second, to provide the user with semantic information, obstacles are classified as walls, doors, steps and a residual class that covers isolated objects and bumpy parts on the floor. Third, to guarantee real-time performance, the system is accelerated by offloading parallel operations to the GPU. Experiments demonstrate that the whole system runs at 9 Hz.
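    As a rough illustration of the semantic step, obstacles segmented from RGB-D data can be classified over the four classes named above using simple geometric features. The features and thresholds below are purely illustrative assumptions and not the paper's classifier, which operates on the actual segmented depth data.

```python
def classify_obstacle(normal_z, height_m):
    """Toy rule-based classifier over the abstract's four classes.

    normal_z: z-component of the segment's dominant surface normal
              (near 0 for vertical surfaces, near 1 for horizontal).
    height_m: vertical extent of the segment in metres.
    All thresholds are hypothetical."""
    if abs(normal_z) < 0.3:          # near-vertical surface
        return "wall" if height_m > 2.2 else "door"
    if height_m < 0.25:              # low horizontal rise on the floor
        return "step"
    return "object"                  # residual class: isolated objects
```

    A real system would add colour cues and context (a door sits in a wall plane, a step borders the floor plane) rather than rely on extent alone.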

    A markerless 3D tracking approach for augmented reality applications


    Solar panel detection within complex backgrounds using thermal images acquired by UAVs

    The number of solar plant installations around the world increases year by year. Automated diagnostic methods are needed to inspect these plants and to identify anomalies within their photovoltaic panels. The inspection is usually carried out by unmanned aerial vehicles (UAVs) using thermal imaging sensors, and the first step in the whole process is to detect the solar panels in those images. However, standard image processing techniques fail in the case of low-contrast images or images with complex backgrounds, and the shadows of power lines or structures similar to solar panels impede the automated detection process. In this research, two self-developed methods for detecting panels in this context are compared: one based on classical techniques and one based on deep learning, both with a common post-processing step. The first method is based on edge detection and classification, whereas the second is based on training a region-based convolutional neural network to identify panels. The first method corrects for the low contrast of the thermal image using several preprocessing techniques; subsequently, edge detection, segmentation and segment classification are applied, the latter using a support vector machine trained with an optimized texture descriptor vector. The second method is trained on images that have been subjected to three different pre-processing operations. The post-processing step uses the detected panels to infer the location of panels that were missed: it selects contours from detected panels based on panel area and rotation angle, and new panels are then determined by extrapolating these contours. The panels in 100 random images taken from eleven UAV flights over three solar plants were labeled and used to evaluate the detection methods. The method based on classical techniques reaches a precision of 0.997, a recall of 0.970 and an F1 score of 0.983; the deep learning method reaches a precision of 0.996, a recall of 0.981 and an F1 score of 0.989. Both panel detection methods are highly effective in the presence of complex backgrounds.
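    The F1 score reported above is the standard harmonic mean of precision and recall, which can be checked directly against the listed figures:

```python
def f1_score(precision, recall):
    """F1 metric: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

    Plugging in the classical-method figures, f1_score(0.997, 0.970) ≈ 0.983, matching the reported value; the deep-learning figures (0.996, 0.981) give ≈ 0.988, consistent with the reported 0.989 up to rounding of the underlying counts.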